Distributed Gossip-GAN for Low-overhead CSI Feedback Training in FDD mMIMO-OFDM Systems

Cao, Yuwen, Liu, Guijun, Ohtsuki, Tomoaki, Yang, Howard H., Quek, Tony Q. S.

arXiv.org Artificial Intelligence

The deep autoencoder (DAE) framework has turned out to be efficient in reducing the channel state information (CSI) feedback overhead in massive multiple-input multiple-output (mMIMO) systems. However, the DAE approaches presented in prior works rely heavily on large-scale data collected at the base station (BS) for model training, incurring excessive bandwidth usage and raising data privacy issues, particularly in mMIMO systems. When users move and encounter new channel environments, existing CSI feedback models often need to be retrained; upon returning to previously visited environments, however, the retrained models perform poorly and face the risk of catastrophic forgetting. To solve these challenging problems, we propose a novel gossiping generative adversarial network (Gossip-GAN)-aided CSI feedback training framework. Notably, Gossip-GAN enables CSI feedback training with low overhead while preserving users' privacy. Specifically, each user collects a small amount of data to train a GAN model. Meanwhile, a fully distributed gossip-learning strategy is exploited to avoid model overfitting and to accelerate model training. Simulation results demonstrate that Gossip-GAN can i) achieve a CSI feedback accuracy similar to centralized training on real-world datasets, ii) address catastrophic forgetting in mobile scenarios, and iii) greatly reduce uplink bandwidth usage. Our results also show that the proposed approach possesses an inherent robustness.
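
A minimal sketch of the core gossip-learning step the abstract describes, assuming pairwise parameter averaging between randomly chosen peers; the toy generator and peer-selection scheme below are illustrative placeholders, not the paper's exact Gossip-GAN update.

import random
import torch
import torch.nn as nn

def make_generator(latent_dim=32, csi_dim=256):
    # Toy stand-in for a per-user CSI generator (hypothetical architecture).
    return nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                         nn.Linear(128, csi_dim))

def gossip_average(model_a, model_b):
    # Pairwise parameter averaging -- the core fully distributed gossip step.
    with torch.no_grad():
        for p_a, p_b in zip(model_a.parameters(), model_b.parameters()):
            avg = 0.5 * (p_a + p_b)
            p_a.copy_(avg)
            p_b.copy_(avg)

users = [make_generator() for _ in range(8)]   # one GAN generator per user
for _ in range(10):                            # gossip rounds
    i, j = random.sample(range(len(users)), 2)
    # (each user would locally train on its small CSI dataset here)
    gossip_average(users[i], users[j])

Because only model parameters are exchanged, raw CSI samples never leave the user, which is the privacy argument the abstract makes.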


A Wireless Foundation Model for Multi-Task Prediction

Sheng, Yucheng, Wang, Jiacheng, Zhou, Xingyu, Liang, Le, Ye, Hao, Jin, Shi, Li, Geoffrey Ye

arXiv.org Artificial Intelligence

With the growing complexity and dynamics of mobile communication networks, accurately predicting key system parameters, such as channel state information (CSI), user location, and network traffic, has become essential for a wide range of physical (PHY)-layer and medium access control (MAC)-layer tasks. Although traditional deep learning (DL)-based methods have been widely applied to such prediction tasks, they often struggle to generalize across different scenarios and tasks. In response, we propose a unified foundation model for multi-task prediction in wireless networks that supports diverse prediction intervals. The proposed model enforces univariate decomposition to unify heterogeneous tasks, encodes granularity for interval awareness, and uses a causal Transformer backbone for accurate predictions. Additionally, we introduce a patch masking strategy during training to support arbitrary input lengths. After being trained on large-scale datasets, the proposed foundation model demonstrates strong generalization to unseen scenarios, and its zero-shot performance on new tasks surpasses traditional full-shot baselines. The advent of 6G communications [1] has made wireless systems more intricate, featuring ultra-dense deployments, diverse service demands, and highly dynamic environments. Efficient execution of PHY-layer and MAC-layer tasks requires accurate and timely knowledge of the surrounding communication environment. Key parameters of interest include CSI [2], user locations [3], and mobile traffic at the base station (BS) [4]. However, these parameters fluctuate rapidly over time, making real-time estimation and feedback particularly challenging. As a result, accurately predicting these variables has become essential for enabling a wide range of downstream communication tasks.
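
An illustrative sketch of the patch-masking idea used to support arbitrary input lengths: split a series into fixed-size patches and randomly mask some during training so the predictor sees variable effective lengths. The patch size, mask rate, and shapes are assumptions, not the paper's exact recipe.

import numpy as np

def patchify(series, patch_len=16):
    # Drop the remainder so the series splits into whole patches.
    n = (len(series) // patch_len) * patch_len
    return series[:n].reshape(-1, patch_len)

def mask_patches(patches, rng, mask_rate=0.3):
    # Zero out random patches and return a mask the model can condition on.
    keep = rng.random(len(patches)) >= mask_rate
    masked = patches * keep[:, None]
    return masked, keep

rng = np.random.default_rng(0)
csi_series = rng.standard_normal(130)   # e.g. one univariate CSI track
patches = patchify(csi_series)
masked, keep = mask_patches(patches, rng)
print(patches.shape, keep)              # (8, 16) patches, boolean keep-mask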


Physics-Inspired Deep Learning Anti-Aliasing Framework in Efficient Channel State Feedback

Lin, Yu-Chien, Xin, Yan, Lee, Ta-Sung, Zhang, Charlie, Ding, Zhi

arXiv.org Artificial Intelligence

Acquiring downlink channel state information (CSI) at the base station is vital for optimizing performance in massive multiple-input multiple-output (MIMO) frequency-division duplexing (FDD) systems. While deep learning architectures have been successful in facilitating UE-side CSI feedback and gNB-side recovery, the undersampling issue prior to CSI feedback is often overlooked. This issue, which arises from the low-density pilot placement in current standards, results in significant aliasing effects in outdoor channels and consequently limits CSI recovery performance. To this end, this work introduces a new CSI upsampling framework at the gNB as a post-processing solution to address the gaps caused by undersampling. Leveraging the physical principles of the discrete Fourier transform (DFT) shift theorem and multipath reciprocity, our framework effectively uses uplink CSI to mitigate aliasing effects. We further develop a learning-based method that integrates the proposed algorithm with the Iterative Shrinkage-Thresholding Algorithm Net (ISTA-Net) architecture, enhancing our approach for non-uniform sampling recovery. Our numerical results show that both our rule-based and deep learning methods significantly outperform traditional interpolation techniques and current state-of-the-art approaches.
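
A numerical check of the DFT shift property the framework leans on: multiplying frequency-domain CSI by a linear phase ramp circularly shifts the delay-domain channel taps. This verifies only the underlying identity, not the paper's full anti-aliasing recovery algorithm.

import numpy as np

N, shift = 64, 5
rng = np.random.default_rng(1)
h = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # delay-domain taps
H = np.fft.fft(h)                                         # frequency-domain CSI

k = np.arange(N)
H_shifted = H * np.exp(-2j * np.pi * k * shift / N)       # linear phase ramp
h_shifted = np.fft.ifft(H_shifted)

assert np.allclose(h_shifted, np.roll(h, shift))          # circular tap shift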


Universal Auto-encoder Framework for MIMO CSI Feedback

So, Jinhyun, Kwon, Hyukjoon

arXiv.org Artificial Intelligence

Existing auto-encoder (AE)-based channel state information (CSI) feedback frameworks have focused on a specific configuration of user equipment (UE) and base station (BS), so the input and output sizes of the AE are fixed. In real-world scenarios, however, the input and output sizes may vary depending on the number of antennas at the BS and UE and the allocated resource blocks in the frequency dimension. A naive approach to supporting different input and output sizes is to use multiple AE models, which is impractical for the UE due to its limited hardware (HW) resources. In this paper, we propose a universal AE framework that can support different input sizes and multiple compression ratios. The proposed AE framework significantly reduces the HW complexity while providing comparable performance in terms of the compression ratio-distortion trade-off compared to the naive and state-of-the-art approaches.
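
A minimal sketch of one way a single AE can serve variable input sizes and multiple compression ratios: zero-pad inputs to a common maximum and truncate the latent vector to the number of feedback coefficients kept. This arrangement is an assumption for illustration, not the paper's actual architecture.

import torch
import torch.nn as nn

MAX_DIM, LATENT = 512, 64   # assumed maximum CSI size and latent size

class UniversalAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(MAX_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, LATENT))
        self.dec = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                                 nn.Linear(256, MAX_DIM))

    def forward(self, x, keep):
        # Zero-pad the CSI input up to the common maximum size.
        padded = torch.zeros(x.shape[0], MAX_DIM)
        padded[:, :x.shape[1]] = x
        z = self.enc(padded)
        # Keep only the first `keep` latent entries -> variable compression.
        z = torch.cat([z[:, :keep],
                       torch.zeros(x.shape[0], LATENT - keep)], dim=1)
        return self.dec(z)[:, :x.shape[1]]   # crop back to the original size

ae = UniversalAE()
out = ae(torch.randn(4, 300), keep=16)   # 300-dim CSI, stronger compression
print(out.shape)                          # torch.Size([4, 300])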


Impact of Free-carrier Nonlinearities on Silicon Microring-based Reservoir Computing

Castro, Bernard J. Giron, Peucheret, Christophe, Zibar, Darko, Da Ros, Francesco

arXiv.org Artificial Intelligence

We quantify the impact of thermo-optic and free-carrier effects on time-delay reservoir computing using a silicon microring resonator. We identify pump power and frequency detuning ranges that achieve a normalized mean squared error (NMSE) below 0.05 on the NARMA-10 task, depending on the time constants of the two considered effects.
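
For reference, the NARMA-10 benchmark and the NMSE metric behind the 0.05 threshold are standard and sketched below; the photonic reservoir itself is not modeled here.

import numpy as np

def narma10(u):
    # y[t+1] = 0.3 y[t] + 0.05 y[t] * sum(y[t-9..t]) + 1.5 u[t-9] u[t] + 0.1
    y = np.zeros(len(u))
    for t in range(9, len(u) - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * np.sum(y[t - 9:t + 1])
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return y

def nmse(y_true, y_pred):
    # Error power normalized by the target variance.
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

rng = np.random.default_rng(2)
u = rng.uniform(0.0, 0.5, 2000)   # standard NARMA-10 input range
y = narma10(u)
print(nmse(y, np.roll(y, 1)))     # naive one-step-behind baseline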


DDPG-Driven Deep-Unfolding with Adaptive Depth for Channel Estimation with Sparse Bayesian Learning

Hu, Qiyu, Shi, Shuhan, Cai, Yunlong, Yu, Guanding

arXiv.org Artificial Intelligence

Deep-unfolding neural networks (NNs) have received great attention since they achieve satisfactory performance with relatively low complexity. Typically, these deep-unfolding NNs are restricted to a fixed depth for all inputs, yet the optimal number of layers required for convergence varies with the input. In this paper, we first develop a framework of deep deterministic policy gradient (DDPG)-driven deep-unfolding with adaptive depth for different inputs, where the trainable parameters of the deep-unfolding NN are learned by DDPG rather than updated directly by the stochastic gradient descent algorithm. Specifically, the optimization variables, trainable parameters, and architecture of the deep-unfolding NN are designed as the state, action, and state transition of DDPG, respectively. This framework is then employed to address the channel estimation problem in massive multiple-input multiple-output (MIMO) systems. First, we formulate the channel estimation problem with an off-grid basis and develop a sparse Bayesian learning (SBL)-based algorithm to solve it. Second, the SBL-based algorithm is unfolded into a layer-wise structure with a set of introduced trainable parameters. Third, the proposed DDPG-driven deep-unfolding framework is employed to solve this channel estimation problem based on the unfolded structure of the SBL-based algorithm. To realize adaptive depth, we design a halting score, a function of the channel reconstruction error, that indicates when to stop. Furthermore, the proposed framework is extended to realize adaptive depth for general deep neural networks (DNNs). Simulation results show that the proposed algorithm outperforms conventional optimization algorithms and fixed-depth DNNs with a much-reduced number of layers.
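
A sketch of the adaptive-depth control flow: run unfolded layers until a halting score tied to the channel-reconstruction error says stop. The layer update and score function below are simple placeholders, not the paper's learned SBL layers or DDPG policy.

import numpy as np

def unfolded_layer(h_est, y, A, step=0.3):
    # Placeholder gradient-style refinement of the channel estimate.
    return h_est + step * A.conj().T @ (y - A @ h_est)

def halting_score(h_new, h_old):
    # Relative change as a cheap proxy for the reconstruction error.
    return np.linalg.norm(h_new - h_old) / (np.linalg.norm(h_old) + 1e-12)

rng = np.random.default_rng(3)
A = rng.standard_normal((32, 64)) / np.sqrt(32)   # toy measurement matrix
h_true = rng.standard_normal(64)
y = A @ h_true

h = np.zeros(64)
for depth in range(1, 51):                # cap the unfolded depth at 50
    h_next = unfolded_layer(h, y, A)
    if halting_score(h_next, h) < 1e-3:   # stop once the estimate settles
        break
    h = h_next
print("layers used:", depth)              # depth adapts to this input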


Hyper-Parameter Auto-Tuning for Sparse Bayesian Learning

Gao, Dawei, Guo, Qinghua, Jin, Ming, Liao, Guisheng, Eldar, Yonina C.

arXiv.org Artificial Intelligence

Choosing the values of hyper-parameters in sparse Bayesian learning (SBL) can significantly impact performance, yet these hyper-parameters are normally tuned manually, which is often a difficult task. Recently, effective automatic hyper-parameter tuning was achieved by using an empirical auto-tuner. In this work, we address the issue of hyper-parameter auto-tuning using neural network (NN)-based learning. Inspired by the empirical auto-tuner, we design and learn an NN-based auto-tuner and show that it achieves considerable improvement in convergence rate and recovery performance.
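
A sketch of the general idea: a small network maps statistics of the current SBL iterate to the next value of a hyper-parameter (here a pruning threshold). The network is untrained and the input features are illustrative assumptions, not the paper's design.

import torch
import torch.nn as nn

class AutoTuner(nn.Module):
    def __init__(self):
        super().__init__()
        # Softplus output keeps the predicted hyper-parameter positive.
        self.net = nn.Sequential(nn.Linear(3, 16), nn.ReLU(),
                                 nn.Linear(16, 1), nn.Softplus())

    def forward(self, residual_energy, sparsity, iteration):
        feats = torch.stack([residual_energy, sparsity, iteration])
        return self.net(feats)

tuner = AutoTuner()
threshold = tuner(torch.tensor(0.8), torch.tensor(0.1), torch.tensor(1.0))
print(threshold)   # would replace a manually tuned SBL threshold per iteration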